In my recent talk at Educating for Success, I invited the audience to consider how AI tools could be used in university assessments. The video is below.  

I spent a whole afternoon the day before the talk generating images with the AI tool Midjourney, so even if you don’t watch the video below, you’ll at least get a sense of what ‘text-to-image’ tools can currently do for a first-time user 😉

This is a follow-up post to my talk with more details on what integrating AI tools into assessments might actually look like.  

I suggest a 3-stage approach, because… 3 is a nice number and I couldn’t think of more ideas, so please do get in touch if you have more thoughts on what assessments in the AI world might look like.  

Stage 1: Create and critically evaluate AI output  

Step 1: Educators generate topic options using ChatGPT 

AI output is a bit like Aldi specials: some will be great, but not all of them.  

I generated some prompts for different disciplines and asked colleagues around campus to review and critique them. The overall consensus was that, while some prompts were a bit strange, the majority were legit, and some were even pretty good.    

E.g. Psychology (examples below were generated by ChatGPT free version in April 2023) 

  1. Diagnosis of Mental Health Disorders: Students can compare the accuracy of an AI algorithm’s diagnosis of mental health disorders with that of a human clinician. They can evaluate the strengths and weaknesses of each approach, and determine which one is more effective in different situations. For example, they can examine whether AI is better at detecting certain disorders, while human clinicians may be better at identifying more complex or nuanced symptoms. 
  1. Behavioral Analysis: Students can compare the accuracy of an AI model’s predictions for behavioral analysis tasks, such as identifying patterns of behavior associated with mental health disorders, with those generated by human experts. They can also evaluate the strengths and weaknesses of each approach and determine which one is more effective in different situations. 

E.g. Geology (examples below were generated by ChatGPT free version in April 2023) 

  1. Mineral Identification: Students can analyse the output of AI models that are used to identify minerals in rock samples. They can compare the accuracy of the AI model’s predictions with those generated by traditional methods, such as visual inspection and chemical analysis. They can also evaluate the strengths and weaknesses of each approach and determine which one is more effective in different situations. 
  1. Data Preprocessing: Students can analyse the role of data preprocessing in mineral identification using AI. They can examine how different techniques for cleaning, transforming, and normalising data can impact the accuracy of AI models for mineral identification. They can also compare the performance of AI models with and without data preprocessing. 
  1. Feature Extraction: Students can analyse the role of feature extraction in mineral identification using AI. They can examine how different techniques for selecting and extracting features from mineral samples can impact the accuracy of AI models. They can also compare the performance of AI models with different feature extraction techniques. 
  1. Model Selection: Students can analyse the role of model selection in mineral identification using AI. They can examine how different types of AI models, such as neural networks, support vector machines, and decision trees, can impact the accuracy of mineral identification. They can also compare the performance of AI models with different model selection techniques. 
  1. Uncertainty Analysis: Students can analyse the uncertainty associated with mineral identification using AI. They can examine how different sources of uncertainty, such as noise in the data, model error, and parameter uncertainty, can impact the accuracy of AI models. They can also compare the performance of AI models with different uncertainty analysis techniques. 

Step 2: Students select topic options that most appeal to them  

At least 2 students OR teams should have the same topic, as they’ll swap their AI outputs at a later stage.  

Step 3: Students generate AI output  

Students (individually or in pairs) use AI tools (e.g. ChatGPT free version) to generate outputs.  

The challenge is to generate as good an output as they can (some of the marks can be allocated to the quality of the generated outputs). Students keep a record (e.g. screenshots) of their prompts and share them in the iLearn forum. An additional goal of the forum is for students to assist each other with good prompt generation.  

Step 4: Self-assessment  

Students reflect on their work/process and complete a self-assessment/reflection. They will include these in their draft.  

Step 5: Peer feedback 

Students with the same topic swap outputs and (i) rate the quality of the outputs; (ii) respond to student self-assessment and reflection and (iii) analyse and evaluate the outputs for strengths/weaknesses. A large part of the analyses and evaluation can be done during tutorials (in real time) to increase learning opportunities and assure learning.  

Step 6: Actioning feedback  

Students act on the feedback that they’ve received and keep records of their improvements.  

Step 7: Interviews  

Students have interviews with the educator about their project and findings. The interview acts as assurance of learning.   

Stage 2: Improve AI output  

Step 1: Improving output with record-keeping  

Students can continue the projects from Stage 1 and improve the quality of AI output. It’s important that students keep a record of their process, as the process will be evaluated as well as the product.  

Step 2: Self-assessment and reflection  

Students reflect on their work/process and complete a self-assessment/reflection. They will include these in their draft.  

Step 3: Peer review 

Students provide others who have worked on similar topics with feedback on their process and product. Like in Stage 1, students will be evaluated on the quality of their feedback to others.  

Step 4: Actioning feedback and reflection  

Students act on the feedback that they’ve received and keep a record of their improvements.  

Step 5: Output submission  

Students submit improved output, their record of improvements, the feedback that they provided and received, and actions taken for evaluation. The bulk of marks will be given for process rather than product.  

Step 6: Interviews  

Oral interviews throughout the process can both assure learning and act as feedforward.  

Stage 3: Complex/Real-life assessments  

Step 1: Topic selection  

Students can be presented with real-life problems:

E.g. The field of Education (examples below were generated by ChatGPT free version in April 2023) 

  1. Addressing the achievement gap: Students can investigate the achievement gap between different student demographics and propose solutions for improving equity and access to educational opportunities. They could research the history of educational inequality, analyse data on NAPLAN and AEDI student performance, and work with community organisations and policymakers to develop and implement a plan. 
  1. Reducing bullying and harassment: Students can investigate the problem of bullying and harassment in schools and propose solutions for reducing and preventing this behaviour. They could research the psychology of bullying, analyse data on incidents of bullying and harassment, and work with school counsellors, school leaders, and community organisations to develop and implement a plan. 
  1. Improving teacher retention: Students can investigate the problem of teacher turnover and propose solutions for improving job satisfaction and retention rates. They could research the factors that contribute to teacher burnout and turnover, analyse data on teacher retention rates, and work with relevant jurisdiction authorities and policymakers to develop and implement a plan.

At least 3 students or teams should be looking at the same topic, as they’ll act as peer reviewers for each other. 

Step 2: Project investigation  

Students use any tool, including AI tools, to investigate the problems (note: students need to attribute the work of AI where used; a lack of attribution will be marked down). When AI tools are used, students need to fact-check and verify the information, since AI can produce inaccurate outputs.  

Step 3: Communicating results 

Students prepare infographics and oral presentations (e.g., recorded videos of themselves) presenting their findings and proposals. Marks are awarded for the clarity of their communication (rather than content).

Step 4: Self-assessment/reflection  

Students reflect on their work/process, complete a self-assessment/reflection, and keep a record of these for assessment.  

Step 5: Peer feedback  

Other teams that have investigated the same topics provide critical feedback. They keep track of the feedback and submit it for assessment (students get marks for the quality of their feedback). 

Students act on the peer reviewers’ feedback and create a summary of their changes and the things they’ve learnt. 

Step 6: Interviews  

Students are assessed on the process (feedback that they provided, actions taken in response to feedback and critical reflections) as opposed to the product.  


Want more?

UNESCO Quick Guide ChatGPT and Artificial Intelligence in higher education  

A Proposed AI Literacy Framework for Higher Education.

Other posts in the Generative AI Series

Image credits: Images were generated by the author using Midjourney AI ‘text-to-image’ tool in May 2023.

Posted by Olga Kozar

I'm a 'long-term' Mq girl. I did my PhD here and taught on different courses, ranging from 1st year to PhD students. I now work in Learning and Teaching, which I love. I have 2 young kids and a dog, and I love meeting other Mq people, so give me a shout if you'd like to talk 'learning and teaching' or would like to brainstorm together.
